Parallel image compression
A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, and how they are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
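The lossy step in a vector-quantization scheme like the one described replaces each input vector with the index of its nearest codeword, so only the small index stream and the codebook need to be stored. The following sketch illustrates that idea with a simple k-means-style codebook trainer; it is a minimal illustration of vector quantization in general, not the MPP algorithm from the abstract, and all function names are our own.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_codebook(vectors, k, iters=10, seed=0):
    """Toy k-means-style codebook training for vector quantization.
    Illustrative only -- not the parallel MPP algorithm of the paper."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)
    for _ in range(iters):
        # Assign each training vector to its nearest codeword.
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda j: dist2(v, codebook[j]))
            clusters[i].append(v)
        # Move each codeword to the centroid of its cluster.
        for i, c in enumerate(clusters):
            if c:
                codebook[i] = tuple(sum(x) / len(c) for x in zip(*c))
    return codebook

def compress(vectors, codebook):
    """Lossy step: each vector becomes the index of its nearest codeword."""
    return [min(range(len(codebook)), key=lambda j: dist2(v, codebook[j]))
            for v in vectors]
```

Decompression simply looks each index back up in the codebook, which is where the loss is incurred.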
Experimental Progress in Computation by Self-Assembly of DNA Tilings
Approaches to DNA-based computing by self-assembly require the
use of DNA nanostructures, called tiles, that have efficient chemistries, expressive
computational power, and convenient input and output (I/O) mechanisms.
We have designed two new classes of DNA tiles: TAO and TAE, both
of which contain three double-helices linked by strand exchange. Structural
analysis of a TAO molecule has shown that the molecule assembles efficiently
from its four component strands. Here we demonstrate a novel method for
I/O whereby multiple tiles assemble around a single-stranded (input) scaffold
strand. Computation by tiling theoretically results in the formation of structures
that contain single-stranded (output) reporter strands, which can then
be isolated for subsequent steps of computation if necessary. We illustrate the
advantages of TAO and TAE designs by detailing two examples of massively
parallel arithmetic: construction of complete XOR and addition tables by linear
assemblies of DNA tiles. The three helix structures provide flexibility for
topological routing of strands in the computation, allowing the implementation
of string tile models.
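The cumulative-XOR computation performed by such a linear tile assembly can be described abstractly: tile i reads input bit x_i from the scaffold and the running value y_{i-1} exposed by its neighbour, and presents y_i = x_i XOR y_{i-1} on its output side. A software sketch of that abstraction (the real tiles implement the rule through sticky-end matching, not code):

```python
def assemble_xor(scaffold_bits):
    """Abstract simulation of cumulative XOR by a linear tile assembly:
    tile i computes y_i = x_i XOR y_{i-1}.  Software sketch only --
    physical tiles realize this rule via sticky-end hybridization."""
    outputs = []
    prev = 0  # a boundary tile supplies y_0 = 0
    for x in scaffold_bits:
        y = x ^ prev        # the tile's "computation"
        outputs.append(y)
        prev = y            # the exposed sticky end seen by the next tile
    return outputs
```

For the input 1, 0, 1, 1 the assembled reporter strand would read 1, 1, 0, 1, and every distinct input scaffold in the annealing mixture yields its own row of the XOR table in parallel.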
Parallel Batch-Dynamic Graph Connectivity
In this paper, we study batch parallel algorithms for the dynamic
connectivity problem, a fundamental problem that has received considerable
attention in the sequential setting. The most well known sequential algorithm
for dynamic connectivity is the elegant level-set algorithm of Holm, de
Lichtenberg and Thorup (HDT), which achieves O(log^2 n) amortized time per
edge insertion or deletion, and O(log n / log log n) time per query. We
design a parallel batch-dynamic connectivity algorithm that is work-efficient
with respect to the HDT algorithm for small batch sizes, and is asymptotically
faster when the average batch size is sufficiently large. Given a sequence of
batched updates, where Δ is the average batch size of all deletions, our
algorithm achieves O(log n log(1 + n/Δ)) expected amortized work per
edge insertion and deletion and O(log^3 n) depth w.h.p. Our algorithm
answers a batch of k connectivity queries in O(k log(1 + n/k)) expected
work and O(log n) depth w.h.p. To the best of our knowledge, our algorithm
is the first parallel batch-dynamic algorithm for connectivity.
Comment: This is the full version of the paper appearing in the ACM Symposium
on Parallelism in Algorithms and Architectures (SPAA), 2019
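To make the batch interface concrete, here is a minimal sequential baseline for the insertion-only case: a union-find structure that accepts batches of edge insertions and batches of connectivity queries. It is emphatically not the HDT-style or parallel batch-dynamic algorithm of the abstract (it cannot handle deletions at all, which is exactly what the level structures are for); it only illustrates the operations the abstract is costing.

```python
class UnionFind:
    """Incremental connectivity (insertions only).  A baseline for the
    batch interface in the abstract, not the HDT-style algorithm --
    supporting batched *deletions* is the hard part the paper solves."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # Path halving keeps trees shallow.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def batch_insert(self, edges):
        for u, v in edges:
            self.parent[self.find(u)] = self.find(v)

    def batch_query(self, pairs):
        return [self.find(u) == self.find(v) for u, v in pairs]
```

A batch of queries here is embarrassingly parallel; the interesting work in the paper is making batched updates, especially deletions, work-efficient and low-depth.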
Dynamics of Sub-urbanisation - The Growing Periphery of the Metropolis. Berlin 1890 - 2000
What controls the secular process of sub-urbanisation of Berlin to garden and satellite cities, and what were its effects? Through the massive retreat of its wealthy and academic bourgeoisie from the centre, Berlin became a bipolar city. On this side of the "railway ring" stood the "stony Berlin" of tenement blocks, the "largest Mietskasernen city in the world" (Julius Posener). On the other side the "largest villa city in the world" was growing. The concept of the "green" city had some positive influence and brought a long-term "moral mission" of the upper middle classes into the inner city. The pre-war villa settlements were an effective laboratory for the middle-class dream of owning a house and a garden on the green and healthy outskirts of the city. In the competition between the political systems after the War, the GDR pursued an inner-city housing development which, unlike the prevailing "spacious green city" idea in West Berlin, had to remain true to the old city structure. Recently some urban planners and sociologists, looking at suburbia in a positive sense and using concepts like 'net city' or 'edge city', have accentuated the autonomy of suburbia. Whether this suburban mix contains the future of city development remains to be seen.
Efficient VLSI fault simulation
Let C be an acyclic Boolean circuit with n gates and ≤ n inputs. A circuit manufacturing error may result in a "stuck-at" (S-A) fault, yielding a circuit identical to C except that a gate v only outputs a fixed Boolean value. The S-A fault simulation problem for C is to determine all possible S-A faults which can be detected (i.e., faults for which the faulty circuit and C would give distinct outputs) by a given test pattern input. We consider the case where C is a tree (i.e., has fan-out 1). We give a practical algorithm for fault simulation which simultaneously determines all detectable S-A faults for every gate in the circuit tree C. Our algorithm requires only the evaluation of a circuit FS(C) which has ≤ 7n gates and depth ≤ 3(d + 1), where d is the depth of C. Thus the sequential time of our algorithm is ≤ 7n, and the parallel time is ≤ 3(d + 1). Furthermore, FS(C) requires only a small constant factor more VLSI area than does the original circuit C. We also extend our results to obtain efficient methods for fault simulation of oblivious VLSI circuits with feedback lines.
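The problem statement can be made concrete with a naive reference simulator: inject each stuck-at fault in turn and re-evaluate the tree, flagging the fault as detectable when the output changes. This brute-force version costs O(n) per fault rather than the single O(n)-gate FS(C) evaluation of the paper, and the gate encoding below is our own, chosen only for illustration.

```python
def evaluate(gate, inputs, fault=None):
    """Evaluate a tree circuit.  Gates are nested tuples:
    ('in', i) | ('not', g) | ('and', g1, g2) | ('or', g1, g2).
    fault = (gate_object, stuck_value) forces that gate's output.
    Naive reference simulator -- not the paper's FS(C) construction."""
    if fault is not None and gate is fault[0]:
        return fault[1]
    kind = gate[0]
    if kind == 'in':
        return inputs[gate[1]]
    if kind == 'not':
        return not evaluate(gate[1], inputs, fault)
    a = evaluate(gate[1], inputs, fault)
    b = evaluate(gate[2], inputs, fault)
    return (a and b) if kind == 'and' else (a or b)

def all_gates(gate):
    """Enumerate every gate in the tree (fan-out 1, so no sharing)."""
    yield gate
    for child in gate[1:]:
        if isinstance(child, tuple):
            yield from all_gates(child)

def detectable_faults(circuit, inputs):
    """All (gate, stuck_value) pairs detected by this test pattern,
    i.e. whose injection changes the circuit's output."""
    good = evaluate(circuit, inputs)
    return [(g, stuck)
            for g in all_gates(circuit)
            for stuck in (False, True)
            if evaluate(circuit, inputs, fault=(g, stuck)) != good]
```

The paper's contribution is that one evaluation of the ≤ 7n-gate circuit FS(C) replaces all of these per-fault re-evaluations at once.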
Semantic Web technologies in software engineering
Over the years, the software engineering community has developed various tools to support the specification, development, and maintenance of software. Many of these tools use proprietary data formats to store artifacts, which hampers interoperability. The Semantic Web, however, provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Ontologies are used to define the concepts in the domain of discourse and their relationships, and as such provide the formal vocabulary applications use to exchange data. Beside the Web, the technologies developed for the Semantic Web have proven to be useful in other domains as well, especially when data is exchanged between applications from different parties. Software engineering is one of these domains, in which recent research shows that Semantic Web technologies are able to reduce the barriers of proprietary data formats and enable interoperability.
In this tutorial, we present Semantic Web technologies and their application in software engineering. We discuss the current status of ontologies for software entities, bug reports, or change requests, as well as semantic representations for software and its documentation. This way, architecture, design, code, or test models can be shared across application boundaries, enabling a seamless integration of engineering results.
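The data model that makes such sharing possible is the subject-predicate-object triple underlying RDF. The sketch below shows a software-engineering artifact (a bug report linked to a code entity) as triples and a wildcard pattern match over them; the vocabulary terms (`se:BugReport`, `se:affects`, etc.) are invented for illustration, not taken from any published ontology.

```python
# A bug report and the method it affects, as RDF-style triples.
# All prefixes and property names here are hypothetical examples.
triples = [
    ("bug:42", "rdf:type",   "se:BugReport"),
    ("bug:42", "se:affects", "code:Parser.parse"),
    ("bug:42", "se:status",  "open"),
    ("code:Parser.parse", "rdf:type", "se:Method"),
]

def query(triples, s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard,
    like a variable in a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

Because every tool reads and writes the same triple model, a bug tracker and a code-analysis tool can exchange these facts without agreeing on a proprietary file format.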
Construction, analysis, ligation, and self-assembly of DNA triple crossover complexes
This paper extends the study and prototyping of unusual DNA motifs, unknown in nature, but founded
on principles derived from biological structures. Artificially designed DNA complexes show promise as building
blocks for the construction of useful nanoscale structures, devices, and computers. The DNA triple crossover
(TX) complex described here extends the set of experimentally characterized building blocks. It consists of
four oligonucleotides hybridized to form three double-stranded DNA helices lying in a plane and linked by
strand exchange at four immobile crossover points. The topology selected for this TX molecule allows for the
presence of reporter strands along the molecular diagonal that can be used to relate the inputs and outputs of
DNA-based computation. Nucleotide sequence design for the synthetic strands was assisted by the application
of algorithms that minimize possible alternative base-pairing structures. Synthetic oligonucleotides were purified,
stoichiometric mixtures were annealed by slow cooling, and the resulting DNA structures were analyzed by
nondenaturing gel electrophoresis and heat-induced unfolding. Ferguson analysis and hydroxyl radical
autofootprinting provide strong evidence for the assembly of the strands to the target TX structure. Ligation
of reporter strands has been demonstrated with this motif, as well as the self-assembly of hydrogen-bonded
two-dimensional crystals in two different arrangements. Future applications of TX units include the construction
of larger structures from multiple TX units, and DNA-based computation. In addition to the presence of reporter
strands, potential advantages of TX units over other DNA structures include space for gaps in molecular arrays,
larger spatial displacements in nanodevices, and the incorporation of well-structured out-of-plane components
in two-dimensional arrays.
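One sequence-design criterion of the kind mentioned in the abstract, minimizing possible alternative base-pairing structures, can be sketched as counting unintended binding sites: windows of one strand that are the reverse complement of windows of another. This is a crude stand-in for the actual design algorithms used, included only to make the criterion concrete.

```python
def revcomp(s):
    """Reverse complement of a DNA sequence (binding is antiparallel)."""
    comp = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}
    return ''.join(comp[b] for b in reversed(s))

def alt_pairings(strands, k=4):
    """Count potential unintended binding sites: every (strand, position)
    pair where a length-k window of one strand is the reverse complement
    of a window of a different strand.  A crude illustration of the
    sequence-design criterion, not the paper's actual algorithm."""
    windows = {}
    for name, s in strands.items():
        for i in range(len(s) - k + 1):
            windows.setdefault(s[i:i + k], []).append((name, i))
    hits = []
    for name, s in strands.items():
        for i in range(len(s) - k + 1):
            for other, j in windows.get(revcomp(s[i:i + k]), []):
                if other != name:
                    hits.append((name, i, other, j))
    return hits
```

A real designer would additionally whitelist the intended sticky-end and crossover pairings and minimize only the remaining hits.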
Negative Interactions in Irreversible Self-Assembly
This paper explores the use of negative (i.e., repulsive) interactions in the
abstract Tile Assembly Model defined by Winfree. Winfree postulated negative
interactions to be physically plausible in his Ph.D. thesis, and Reif, Sahu,
and Yin explored their power in the context of reversible attachment
operations. We explore the power of negative interactions with irreversible
attachments, and we achieve two main results. Our first result is an
impossibility theorem: after t steps of assembly, Omega(t) tiles will be
forever bound to an assembly, unable to detach. Thus negative glue strengths do
not afford unlimited power to reuse tiles. Our second result is a positive one:
we construct a set of tiles that can simulate a Turing machine with space bound
s and time bound t, while ensuring that no intermediate assembly grows larger
than O(s), rather than O(s * t) as required by the standard Turing machine
simulation with tiles.
An efficient output-sensitive hidden surface removal algorithm and its parallelization
In this paper we present an algorithm for hidden surface removal for a class of polyhedral surfaces which have the property that they can be ordered relatively quickly, like terrain maps. A distinguishing feature of this algorithm is that its running time is sensitive to the actual size of the visible image rather than the total number of intersections in the image plane, which can be much larger than the visible image. The time complexity of this algorithm is O((k + n) log n log log n), where n and k are respectively the input and the output sizes. Thus, in a significant number of situations it will be faster than the worst-case optimal algorithms, which have running time Ω(n^2) irrespective of the output size (whereas the output size k is O(n^2) only in the worst case). We also present a parallel algorithm based on a similar approach which runs in time O(log^4 (n + k)) using O((n + k)/log(n + k)) processors in a CREW PRAM model. All our bounds are obtained using amortized analysis.
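The front-to-back ordering that terrain-like surfaces admit is the key to output sensitivity: once something is hidden behind the current horizon, it never has to be touched again. The following one-dimensional toy captures only that idea (a profile scanned from the viewpoint side, where a sample is visible iff it rises above everything in front of it); it is not the paper's algorithm, whose machinery handles genuine 2-D image planes.

```python
def visible_from_left(heights):
    """Toy 1-D analogue of front-to-back hidden-surface removal:
    scanning a terrain profile away from the viewer, a sample is
    visible iff it pokes above the running horizon.  Illustrates the
    ordering idea only -- not the paper's output-sensitive algorithm."""
    visible = []
    horizon = float('-inf')
    for i, h in enumerate(heights):
        if h > horizon:          # rises above everything in front of it
            visible.append(i)
            horizon = h
    return visible
```

In this toy the work is linear in the input but the list produced has exactly the output size k; the paper achieves an analogous dependence on k for real terrain maps.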